

Amazon says new Vulcan warehouse robot has human touch but won't replace humans

Mashable

This week Amazon debuted a new warehouse robot that has a sense of "touch," but the company also promised its new bot will not replace human warehouse workers. On Monday, at Amazon's Delivering the Future event in Dortmund, Germany, the retail giant introduced the world to Vulcan, a robot designed to sort, pick up, and place objects in storage compartments with the finesse and dexterity of human hands. Rather than humanlike hands, though, the robot's "end of arm tooling" looks like a "ruler stuck onto a hair straightener," as Amazon describes it. The Vulcan warehouse robot is also equipped with cameras and feedback sensors that detect when it makes contact with items and gauge how much force to apply to prevent damage. In its warehouses, Amazon's inventory is stored in soft fabric compartments of about one square foot in size.


Amazon makes 'fundamental leap forward in robotics' with device having sense of touch

The Guardian

Amazon said it has made a "fundamental leap forward in robotics" after developing a robot with a sense of touch that will be capable of grabbing about three-quarters of the items in its vast warehouses. Vulcan, which launches at the US firm's "Delivering the Future" event in Dortmund, Germany, on Wednesday and is to be deployed around the world over the next few years, is designed to help humans sort items for storage and then prepare them for delivery. It is the latest in a suite of robots that play an ever-growing role in the online retailer's extensive operation. Aaron Parness, Amazon's director of robotics, described Vulcan as a "fundamental leap forward in robotics. It's not just seeing the world, it's feeling it, enabling capabilities that were impossible for Amazon robots until now." The robots will be able to identify objects by touch, using AI to work out what they can and can't handle and how best to pick them up.


Approximately Pareto-optimal Solutions for Bi-Objective k-Clustering Problems (Jan Eube, Heinrich Heine University Düsseldorf / University of Bonn)

Neural Information Processing Systems

As a major unsupervised learning method, clustering has received a lot of attention over multiple decades. The various clustering problems that have been studied intensively include, e.g., the k-means problem and the k-center problem. However, in applications, it is common that good clusterings should optimize multiple objectives (e.g., visualizing data on a map by clustering districts into areas that are both geographically compact but also homogeneous with respect to the data). We study combinations of different objectives, for example optimizing k-center and k-means simultaneously or optimizing k-center with respect to two different metrics. Usually these objectives are conflicting and cannot be optimized simultaneously, making it necessary to find trade-offs. We develop novel algorithms for approximating the set of Pareto-optimal solutions for various combinations of two objectives. Our algorithms achieve provable approximation guarantees and we demonstrate in several experiments that the approximate Pareto front contains good clusterings that cannot be found by considering one of the objectives separately.
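
To make the trade-off concrete, here is a minimal sketch of one naive way to approximate a Pareto front for the k-means/k-center combination: generate diverse candidate clusterings (Lloyd-style k-means runs plus greedy farthest-first k-center seedings), score each under both objectives, and keep only the non-dominated cost pairs. This is an illustration only; the paper's algorithms come with provable approximation guarantees that this candidate sweep does not.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_cost(X, centers, labels):
    return float(((X - centers[labels]) ** 2).sum())        # sum of squared distances

def kcenter_cost(X, centers, labels):
    return float(np.linalg.norm(X - centers[labels], axis=1).max())  # max cluster radius

def greedy_kcenter(X, k, seed=0):
    # Gonzalez's farthest-first traversal, a classic 2-approximation for k-center.
    rng = np.random.RandomState(seed)
    idx = [rng.randint(len(X))]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - X[i], axis=1) for i in idx], axis=0)
        idx.append(int(d.argmax()))                          # farthest point becomes next center
    centers = X[idx]
    labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
    return centers, labels

def approximate_pareto_front(X, k, n_runs=10):
    candidates = []
    for s in range(n_runs):                                  # diverse candidate clusterings
        km = KMeans(n_clusters=k, n_init=1, random_state=s).fit(X)
        candidates.append((kmeans_cost(X, km.cluster_centers_, km.labels_),
                           kcenter_cost(X, km.cluster_centers_, km.labels_)))
        c, l = greedy_kcenter(X, k, seed=s)
        candidates.append((kmeans_cost(X, c, l), kcenter_cost(X, c, l)))
    # keep only non-dominated (approximately Pareto-optimal) cost pairs
    return sorted(p for p in candidates
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in candidates))

X = np.random.RandomState(0).randn(400, 2)
print(approximate_pareto_front(X, k=4))
```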


Domain-incremental White Blood Cell Classification with Privacy-aware Continual Learning

arXiv.org Artificial Intelligence

White blood cell (WBC) classification plays a vital role in hematology for diagnosing various medical conditions. However, it faces significant challenges due to domain shifts caused by variations in sample sources (e.g., blood or bone marrow) and differing imaging conditions across hospitals. Traditional deep learning models often suffer from catastrophic forgetting in such dynamic environments, while foundation models, though generally robust, experience performance degradation when the distribution of inference data differs from that of the training data. To address these challenges, we propose a generative replay-based Continual Learning (CL) strategy designed to prevent forgetting in foundation models for WBC classification. Our method employs lightweight generators that mimic past data with synthetic latent representations, enabling privacy-preserving replay. To demonstrate its effectiveness, we carry out extensive experiments on four datasets with different task orderings and four backbone models: ResNet50, RetCCL, CTransPath, and UNI. Experimental results demonstrate that conventional fine-tuning methods degrade performance on previously learned tasks and struggle with domain shifts. In contrast, our continual learning strategy effectively mitigates catastrophic forgetting, preserving model performance across varying domains. This work presents a practical solution for maintaining reliable WBC classification in real-world clinical settings, where data distributions frequently evolve.
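
The key privacy point is that replay happens in latent space: no raw patient images are stored or regenerated. Below is a minimal sketch of that idea under simplifying assumptions of ours (a frozen backbone producing features, and per-class Gaussians standing in for the paper's learned lightweight generators).

```python
import torch
import torch.nn as nn

class LatentGaussianReplay:
    """Per-class Gaussian over backbone features (an assumption of this sketch;
    the paper trains lightweight generators rather than fitting Gaussians)."""
    def __init__(self):
        self.stats = {}                            # class id -> (mean, std) of latents

    def fit(self, feats, labels):
        for c in labels.unique():
            z = feats[labels == c]
            self.stats[int(c)] = (z.mean(0), z.std(0) + 1e-4)

    def sample(self, n_per_class):
        zs, ys = [], []
        for c, (mu, sd) in self.stats.items():
            zs.append(mu + sd * torch.randn(n_per_class, mu.numel()))
            ys.append(torch.full((n_per_class,), c, dtype=torch.long))
        return torch.cat(zs), torch.cat(ys)

def train_step(head, opt, new_feats, new_labels, replay, n_replay=32):
    # Mix synthetic latents from earlier domains with features from the new domain;
    # only the shared classifier head is updated, the backbone stays frozen.
    z_old, y_old = replay.sample(n_replay)         # synthetic latents, no raw images
    z = torch.cat([new_feats, z_old])
    y = torch.cat([new_labels, y_old])
    loss = nn.functional.cross_entropy(head(z), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```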



The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval

arXiv.org Artificial Intelligence

The CheckThat! lab aims to advance the development of innovative technologies designed to identify and counteract online disinformation and manipulation efforts across various languages and platforms. The first five editions focused on key tasks in the information verification pipeline, including check-worthiness, evidence retrieval and pairing, and verification. Since the 2023 edition, the lab has expanded its scope to address auxiliary tasks that support research and decision-making in verification. In the 2025 edition, the lab revisits core verification tasks while also considering auxiliary challenges. Task 1 focuses on the identification of subjectivity (a follow-up from CheckThat! 2024), Task 2 addresses claim normalization, Task 3 targets fact-checking numerical claims, and Task 4 explores scientific web discourse processing. These tasks present challenging classification and retrieval problems at both the document and span levels, including multilingual settings.
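
As a rough illustration of the kind of entry point Task 1 offers, here is a hypothetical sentence-level subjectivity baseline using TF-IDF features and logistic regression; the sentences and labels are toy placeholders, not lab data, and the official task uses its own corpora and evaluation.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy examples only; replace with the lab's released training data.
train_sents = ["The vote was held on Tuesday.", "This law is a disgrace."]
train_labels = ["OBJ", "SUBJ"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_sents, train_labels)
print(clf.predict(["Officials announced the results."]))
```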


Superalignment with Dynamic Human Values

arXiv.org Artificial Intelligence

Two core challenges of alignment are 1) scalable oversight and 2) accounting for the dynamic nature of human values. While solutions like recursive reward modeling address 1), they do not simultaneously account for 2). We sketch a roadmap for a novel algorithmic framework that trains a superhuman reasoning model to decompose complex tasks into subtasks that are still amenable to human-level guidance. Our approach relies on what we call the part-to-complete generalization hypothesis, which states that the alignment of subtask solutions generalizes to the alignment of complete solutions. We advocate for the need to measure this generalization and propose ways to improve it in the future.
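
The following sketch makes the proposed loop concrete in pseudocode-style Python. Every function here is a hypothetical stand-in (the paper offers a roadmap, not an API): a reasoner decomposes a task, each subtask solution receives human-level review, and the part-to-complete generalization hypothesis is what licenses trusting the assembled whole once all parts are approved.

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    description: str
    solution: str = ""
    approved: bool = False

def decompose(task: str) -> list[Subtask]:
    # Stand-in for the superhuman reasoner's decomposition step.
    return [Subtask(f"{task} / part {i}") for i in range(3)]

def human_level_review(sub: Subtask) -> bool:
    # Stand-in for human (or human-level) guidance on a small subtask.
    return len(sub.solution) > 0

def solve(task: str) -> str | None:
    subtasks = decompose(task)
    for sub in subtasks:
        sub.solution = f"solution to {sub.description}"   # model output placeholder
        sub.approved = human_level_review(sub)
    if all(s.approved for s in subtasks):
        # Part-to-complete generalization: approving every subtask is taken
        # to justify approving the assembled complete solution.
        return " + ".join(s.solution for s in subtasks)
    return None
```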


Halving transcription time: A fast, user-friendly and GDPR-compliant workflow to create AI-assisted transcripts for content analysis

arXiv.org Artificial Intelligence

In qualitative research, data transcription is often labor-intensive and time-consuming. To expedite this process, a workflow utilizing artificial intelligence (AI) was developed. This workflow not only enhances transcription speed but also addresses the issue that AI-generated transcripts often lack compatibility with standard content analysis software. Within this workflow, automatic speech recognition is employed to create initial transcripts from audio recordings, which are then formatted to be compatible with content analysis software such as ATLAS.ti or MAXQDA. Empirical data from a study of 12 interviews suggest that this workflow can reduce transcription time by up to 46.2%. Because it relies on widely used standard software, the workflow is suitable for both students and researchers and adaptable to a variety of learning, teaching, and research environments; it is also particularly beneficial for non-native speakers. In addition, the workflow is GDPR-compliant and facilitates local, offline transcript generation, which is crucial when dealing with sensitive data.
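
A minimal sketch of the local, offline step of such a workflow might look as follows, assuming the open-source openai-whisper package for speech recognition; the bracketed-timestamp layout is our assumption and should be matched to the import format your analysis software expects.

```python
import whisper

model = whisper.load_model("small")        # runs locally; no audio leaves the machine
result = model.transcribe("interview01.wav")

def hms(t: float) -> str:
    # Convert seconds to HH:MM:SS for human-readable timestamps.
    h, rem = divmod(int(t), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

with open("interview01.txt", "w", encoding="utf-8") as f:
    for seg in result["segments"]:
        # e.g. "[00:01:23] text of the segment", ready for manual correction
        # and import into a content analysis tool.
        f.write(f"[{hms(seg['start'])}] {seg['text'].strip()}\n")
```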


Annotating Scientific Uncertainty: A comprehensive model using linguistic patterns and comparison with existing approaches

arXiv.org Artificial Intelligence

This paper presents UnScientify, a system designed to detect scientific uncertainty in scholarly full text. The system utilizes a weakly supervised technique to identify verbally expressed uncertainty in scientific texts and their authorial references. The core methodology of UnScientify is a multi-faceted pipeline that integrates span pattern matching, complex sentence analysis, and author reference checking. This approach streamlines the labeling and annotation processes essential for identifying scientific uncertainty, covering a variety of uncertainty expression types to support diverse applications including information retrieval, text mining, and scientific document processing. The evaluation results highlight the trade-offs between modern large language models (LLMs) and the UnScientify system. UnScientify, which employs more traditional techniques, achieved superior performance on the scientific uncertainty detection task, attaining an accuracy of 0.808. This finding underscores the continued relevance and efficiency of UnScientify's simple rule-based and pattern-matching strategy for this specific application. The results demonstrate that in scenarios where resource efficiency, interpretability, and domain-specific adaptability are critical, traditional methods can still offer significant advantages.
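
To give a feel for the span-pattern-matching component, here is a toy illustration; the cue list and patterns are invented for this sketch, and UnScientify's actual pipeline (complex sentence analysis, author reference checking) goes well beyond simple cue spotting.

```python
import re

# Invented hedging cues for illustration; a real system would use a
# curated, much larger inventory of uncertainty expressions.
HEDGE_CUES = r"\b(may|might|could|appears? to|suggests?|possibly|likely|unclear)\b"

def find_uncertainty_spans(text: str):
    spans = []
    for sent in re.split(r"(?<=[.!?])\s+", text):
        for m in re.finditer(HEDGE_CUES, sent, flags=re.IGNORECASE):
            spans.append((sent, m.group(0)))   # (sentence, matched cue span)
    return spans

text = ("Our results suggest a strong correlation. "
        "The mechanism remains unclear and may depend on dosage.")
for sentence, cue in find_uncertainty_spans(text):
    print(f"cue={cue!r:12} in: {sentence}")
```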


AI-Driven Decision Support in Oncology: Evaluating Data Readiness for Skin Cancer Treatment

arXiv.org Artificial Intelligence

Over the past few years, the field of artificial intelligence (AI) has shown great promise in various domains, including medicine. A potential use case for AI in medicine is its application to managing advanced-stage cancer treatment, where limited evidence often makes treatment choices reliant on the personal expertise of physicians. The complex nature of oncological disease processes and the multitude of factors that must be considered when making treatment decisions make it difficult to rely solely on evidence-based trial data, which is often limited and may exclude certain patient populations. As a result, physicians make decisions on a case-by-case basis, drawing on their experience of previous cases, which is not always objective and may be limited by the small number of cases they have observed. In this context, clinical decision support systems (CDSS) using similarity-based AI approaches can potentially contribute to better oncology treatment by supporting physicians in the selection of treatment methods [1, 2]. One such approach is Case-Based Reasoning (CBR), a subfield of AI that deals with experience-based problem solving.
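
The retrieval step at the heart of CBR can be sketched as a nearest-neighbor lookup over past cases; the features and cases below are invented placeholders, and a real CDSS would use curated, standardized, and clinically weighted variables.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder case base; columns: age, tumor thickness (mm), ulceration (0/1), stage.
# In practice, features would be standardized and weighted by clinical relevance.
past_cases = np.array([
    [54, 1.2, 0, 2],
    [71, 3.5, 1, 3],
    [63, 0.8, 0, 1],
])
treatments = ["excision", "immunotherapy", "excision"]

nn = NearestNeighbors(n_neighbors=2).fit(past_cases)
new_patient = np.array([[68, 3.1, 1, 3]])
dist, idx = nn.kneighbors(new_patient)
for d, i in zip(dist[0], idx[0]):
    # Retrieve similar past cases and the treatments chosen for them.
    print(f"similar case {i} (distance {d:.2f}): treated with {treatments[i]}")
```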